50 research outputs found

    Research to Market Transition of Mobile Assistive Technologies for People with Visual Impairments

    Mobile devices are accessible to people with visual impairments and hence are convenient platforms to support assistive technologies. Indeed, in recent years many scientific contributions have proposed assistive applications for mobile devices. However, few of these solutions were eventually delivered to end users, depriving people with disabilities of important assistive tools. The underlying problem is that a number of challenges need to be faced when transitioning assistive mobile applications from research to market. This contribution reports the authors' experience in the academic research and subsequent distribution of three mobile assistive applications for people with visual impairments. As a general message, we describe the relevant characteristics of the target population, analyze different models of transition from academic research to end-user distribution, and show how the transitioning process has a positive impact on research.

    An efficient algorithm for minimizing time granularity periodical representations

    This paper addresses the technical problem of efficiently reducing the periodic representation of a time granularity to its minimal form. The minimization algorithm presented in the paper has an immediate practical application: it allows users to intuitively define granularities (and, more generally, recurring events) with algebraic expressions that are then internally translated into mathematical characterizations in terms of minimal periodic sets. Minimality plays a crucial role, since the value of the recurring period has been shown to dominate the complexity of processing periodic sets.
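    To make the idea of a minimal periodic representation concrete, the sketch below reduces a periodic set, given as a period P and a set of offsets in [0, P), to the smallest equivalent period. It is a naive illustration of the minimality concept only, not the efficient algorithm presented in the paper; the function name and the (period, offsets) representation are assumptions.

    def minimize_periodic_set(period, offsets):
        """Return an equivalent (period, offsets) pair with the smallest period.

        A periodic set is described by a period P and a set of offsets in
        [0, P); it denotes all integers o + k*P. If the offsets are invariant
        under a shift by a divisor d of P, the same recurring set is described
        by period d and the offsets smaller than d.
        """
        offsets = set(offsets)
        for d in range(1, period + 1):
            if period % d:
                continue  # only divisors of the period can serve as a new period
            if {(o + d) % period for o in offsets} == offsets:
                return d, sorted(o for o in offsets if o < d)
        return period, sorted(offsets)

    # Example: "working days" stated over a two-week period collapse to one week.
    print(minimize_periodic_set(14, {0, 1, 2, 3, 4, 7, 8, 9, 10, 11}))  # -> (7, [0, 1, 2, 3, 4])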

    Sonification of guidance data during road crossing for people with visual impairments or blindness

    In recent years, several solutions have been proposed to support people with visual impairments or blindness during road crossing. These solutions focus on computer vision techniques for recognizing pedestrian crosswalks and computing their position relative to the user. This contribution instead addresses a different problem: the design of an auditory interface that can effectively guide the user during road crossing. Two original auditory guiding modes based on data sonification are presented and compared with a guiding mode based on speech messages. Experimental evaluation shows that no single guiding mode is best suited for all test subjects. The average time to align and cross is not significantly different among the three guiding modes, and test subjects distribute their preferences for the best guiding mode almost uniformly among the three solutions. The experiments also show that decoding the sonified instructions requires more effort than the speech instructions, and that test subjects require frequent 'hints' (in the form of speech messages). Despite this, more than two thirds of test subjects prefer one of the two guiding modes based on sonification. There are two main reasons for this: firstly, with speech messages it is harder to hear the sound of the environment; secondly, sonified messages convey information about the "quantity" of the expected movement.
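    As an illustration of data sonification in this setting, the sketch below maps the user's alignment error with respect to the crosswalk to the pitch of a short tone, so that the pitch conveys the "quantity" of the required correction. This is a minimal, hypothetical mapping for illustration only, not the authors' actual guiding modes; all parameter values (frequency range, maximum deviation, tone length) are assumptions.

    import numpy as np

    def deviation_to_tone(deviation_deg, max_dev=45.0, f_min=300.0, f_max=1200.0,
                          duration=0.2, sample_rate=44100):
        """Map an alignment error (in degrees) to a short sine tone.

        Larger deviations produce a higher pitch, so the listener can judge how
        much rotation is still needed; the sign of the deviation could further
        be encoded by panning the tone left or right (not shown here).
        """
        d = min(abs(deviation_deg), max_dev) / max_dev       # normalize to [0, 1]
        freq = f_min + d * (f_max - f_min)                    # linear pitch mapping
        t = np.linspace(0.0, duration, int(sample_rate * duration), endpoint=False)
        samples = 0.5 * np.sin(2 * np.pi * freq * t)          # mono PCM samples
        return freq, samples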

    Towards privacy protection in a middleware for context-awareness

    Privacy is recognized as a fundamental issue for the provision of context-aware services. In this paper we present work in progress on the definition of a comprehensive framework for supporting context-aware services while protecting users' privacy. Our proposal is based on a combination of mechanisms for enforcing context-aware privacy policies and k-anonymity. Moreover, the proposed technique uses stereotypes to generalize precise identity information, with the aim of protecting users' privacy.
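    To make the role of k-anonymity concrete, the following sketch checks whether a set of user records satisfies k-anonymity with respect to a set of quasi-identifiers, together with one simple generalization step. It is a generic illustration of k-anonymity under assumed attribute names, not the paper's middleware or its stereotype-based mechanism.

    from collections import Counter

    def is_k_anonymous(records, quasi_identifiers, k):
        """True if every combination of quasi-identifier values occurs in at
        least k records, i.e. no user is distinguishable within a group of
        fewer than k users."""
        groups = Counter(tuple(r[a] for a in quasi_identifiers) for r in records)
        return all(count >= k for count in groups.values())

    def generalize_age(record, bucket=10):
        """One possible generalization step: replace an exact age with a coarser
        range so that more users share the same quasi-identifier values."""
        rec = dict(record)
        low = (rec["age"] // bucket) * bucket
        rec["age"] = f"{low}-{low + bucket - 1}"
        return rec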

    A Transfer Learning and Explainable Solution to Detect mpox from Smartphones images

    In recent months, the monkeypox (mpox) virus, previously endemic in a limited area of the world, has started spreading in multiple countries, until being declared a "public health emergency of international concern" by the World Health Organization. The alert was renewed in February 2023 due to the persisting sustained incidence of the virus in several countries and worries about possible new outbreaks. Low-income countries with inadequate infrastructures for vaccine and testing administration are particularly at risk. A symptom of mpox infection is the appearance of skin rashes and eruptions, which can drive people to seek medical advice. A technology that might help perform a preliminary screening based on the aspect of skin lesions is the use of Machine Learning for image classification. However, to make this technology suitable on a large scale, it should be usable directly on people's mobile devices, with a possible notification to a remote medical expert. In this work, we investigate the adoption of Deep Learning to detect mpox from skin lesion images. The proposal leverages Transfer Learning to cope with the scarce availability of mpox image datasets. As a first step, a homogeneous, unpolluted dataset is produced by manual selection and preprocessing of the available image data; it will also be released publicly to researchers in the field. Then, a thorough comparison is conducted among several Convolutional Neural Networks, based on a 10-fold stratified cross-validation. The best models are then optimized through quantization for use on mobile devices; measures of classification quality, memory footprint, and processing times validate the feasibility of our proposal. Additionally, the use of eXplainable AI is investigated as a suitable instrument to both technically and clinically validate the classification outcomes.

    Comment: Submitted to Pervasive and Mobile Computing
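    The following sketch illustrates the general transfer-learning and quantization workflow described above, using a pretrained MobileNetV2 backbone in TensorFlow/Keras and a post-training quantized TensorFlow Lite export for mobile deployment. The choice of backbone, framework, and hyperparameters is an assumption for illustration; the paper compares several CNN architectures and its exact pipeline may differ.

    import tensorflow as tf

    # Pretrained ImageNet backbone, reused as a frozen feature extractor.
    base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                             include_top=False, weights="imagenet")
    base.trainable = False

    # Small classification head for a binary mpox / non-mpox decision.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # hypothetical datasets

    # Post-training quantization for a smaller, faster on-device model.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    with open("mpox_classifier.tflite", "wb") as f:
        f.write(converter.convert())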